Heading to Orlando for SQL Saturday again this year


Last year, I attended SQL Saturday Orlando again after missing a few years (I did my first database design precon there a few years back). I love the way they run their event, and as speakers, we did some fun stuff like being the ones who served lunch to the attendees. This year, I wasn’t sure I would have a chance to go, because I had scheduled a family vacation to Disney World in early October, having no idea when their event might be. When they announced the date, I realized it was a win-win-win situation for me: my original vacation plans ran from Oct 3-7, and their event is on the 10th. Two more days at Disney, AND some SQL learning? A chance to hang out with Andy, Karla, Bradley, and Kendall (not to mention the other speakers)? Sold!

The session I will be doing is the same one I have been doing at the past few events:

“How In-Memory Database Objects Affect Database Design

With SQL Server 2014, Microsoft has added a major new feature to help optimize OLTP database implementations by persisting your data primarily in RAM. Of course, it isn’t that simple; internally, everything that uses this new feature is completely new. While the internals of this feature may be foreign to you, accessing the data that uses these structures very much resembles the T-SQL you already know. As such, the first important question for the average developer will be how to adapt an existing application to make use of the technology to achieve enhanced performance. In this session, I will start with a normalized database and adapt the logical and physical database model/implementation in several ways, performance testing the tables and code changes along the way.”

And it will be the last time I do this session in its current form, since SQL Server 2016 is coming out next year, and the changes it brings will make a tremendous difference to the conclusions I draw when I rework it. The basic material in the session won’t change, as the logical implications of the In-Memory architecture will not change significantly. What will change are the use cases: major new features, such as support for additional constraint types, make the use cases for In-Memory explode, because some kinds of data protection that were possible to enforce in code under a pessimistic (lock-based) concurrency model are completely impossible to enforce in code under an optimistic (version-based) concurrency model. How collisions are handled is really quite different as well.
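To make the constraint point concrete, here is a minimal T-SQL sketch (table and constraint names are my own illustration, not from the session) of a memory-optimized table using declarative constraints. In SQL Server 2014, the CHECK and FOREIGN KEY constraints shown would not be allowed on a memory-optimized table; SQL Server 2016 adds support for them:

```sql
-- Assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup.
-- Illustrative names only (dbo.Account, dbo.AccountLedger are hypothetical).
CREATE TABLE dbo.Account
(
    AccountId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Balance   DECIMAL(12,2) NOT NULL
        -- CHECK constraints on memory-optimized tables: 2016 and later only
        CONSTRAINT CHK_Account_Balance CHECK (Balance >= 0)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

CREATE TABLE dbo.AccountLedger
(
    LedgerId  INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    AccountId INT NOT NULL
        -- FOREIGN KEY constraints on memory-optimized tables: 2016+ only
        CONSTRAINT FK_AccountLedger_Account
            REFERENCES dbo.Account (AccountId),
    Amount    DECIMAL(12,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The collision-handling difference shows up at runtime: under the optimistic, version-based model, two sessions updating the same row do not block one another; instead, the second writer fails immediately with a write-write conflict (error 41302) and must retry, rather than waiting on a lock as in the pessimistic model.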

Next year, along with an update to this presentation, I plan on building a session devoted entirely to the different concurrency models and collision handling, because the differences between the concurrency models are the primary determinant of how you will need to tailor your code.


About the author

Louis Davidson

See Profile

Louis is the former editor of Simple-Talk. Prior to that, he was a corporate database developer and data architect for a non-profit organization for 25 years! Louis has been a Microsoft MVP since 2004, and is the author of a series of SQL Server database design books, most recently Pro SQL Server Relational Database Design and Implementation.